This paper explores the feasibility of finding optimal sub-models from vision transformers and introduces a pure vision transformer slimming (ViT-Slim) framework. It can search a sub-structure from the original model end-to-end across multiple dimensions, including the input tokens, MHSA and MLP modules, with state-of-the-art performance. Our method is based on a learnable and unified L1 sparsity constraint with pre-defined factors to reflect the global importance in the continuous searching space of the different dimensions. The searching process is highly efficient through a single-shot training scheme. For instance, on DeiT-S, ViT-Slim only takes ~43 GPU hours for the searching process, and the searched structure is flexible, with diverse dimensionalities in different modules. A budget threshold is then applied according to the accuracy-FLOPs trade-off required on the target device, and a re-training process is performed to obtain the final model. Extensive experiments show that our ViT-Slim can compress up to 40% of the parameters and 40% of the FLOPs on various vision transformers while increasing accuracy by ~0.6% on ImageNet. We also demonstrate the advantage of our searched models on several downstream datasets. Our source code will be publicly available.
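As an illustration of the kind of learnable L1-regularized soft mask the abstract describes, here is a minimal PyTorch sketch; the class and parameter names are ours, not from the ViT-Slim codebase, and the budget-threshold pruning step is only indicated in comments.

```python
import torch
import torch.nn as nn

class SlimMask(nn.Module):
    """Learnable soft mask over one searchable dimension (e.g. MLP hidden units).

    An L1 penalty on the mask values pushes unimportant channels toward zero;
    after training, channels whose mask falls below a budget threshold are pruned.
    """
    def __init__(self, dim: int, l1_weight: float = 1e-4):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(dim))  # initialized fully "on"
        self.l1_weight = l1_weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mask  # broadcast over the last dimension

    def sparsity_loss(self) -> torch.Tensor:
        return self.l1_weight * self.mask.abs().sum()

# Usage: add mask.sparsity_loss() for every masked dimension to the task loss,
# train once (single-shot), then keep only the channels with the largest mask
# values that fit the chosen accuracy-FLOPs budget, and re-train.
```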
Current state-of-the-art anomaly detection (AD) methods exploit the powerful representations yielded by large-scale ImageNet training. However, catastrophic forgetting prevents the successful fine-tuning of pre-trained representations on new datasets in the semi-supervised setting, and representations are therefore commonly kept fixed. In our work, we propose a new method to overcome catastrophic forgetting and thereby successfully fine-tune pre-trained representations for AD in the transfer learning setting. Specifically, we induce a multivariate Gaussian distribution for the normal class based on the linkage between generative and discriminative modeling, and use the Mahalanobis distance of normal images to the estimated distribution as the training objective. We additionally propose to use augmentations commonly employed for vicinal risk minimization in a validation scheme to detect the onset of catastrophic forgetting. Extensive evaluations on the public MVTec dataset reveal that our method achieves a new state of the art in the AD task while attaining anomaly segmentation performance comparable to the prior state of the art. Further, ablation studies demonstrate the importance of the induced Gaussian distribution as well as the robustness of the proposed fine-tuning scheme with respect to the choice of augmentations.
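The Mahalanobis-distance objective is concrete enough to sketch. Below is a minimal version, assuming features are extracted from the backbone as an (N, D) tensor; the regularization constant is our choice, not the paper's.

```python
import torch

def fit_gaussian(features: torch.Tensor):
    """Fit a multivariate Gaussian to normal-class features of shape (N, D)."""
    mu = features.mean(dim=0)
    centered = features - mu
    cov = centered.T @ centered / (features.shape[0] - 1)
    cov += 1e-5 * torch.eye(features.shape[1])  # regularize for invertibility
    return mu, torch.linalg.inv(cov)

def mahalanobis(x: torch.Tensor, mu: torch.Tensor, cov_inv: torch.Tensor):
    """Squared Mahalanobis distance of feature rows x (B, D) to the fit."""
    d = x - mu
    return (d @ cov_inv * d).sum(dim=1)

# As in the abstract, this distance can serve both as the fine-tuning
# objective for normal images and as the anomaly score at test time.
```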
Real-world applications often require learning continuously from a stream of data under ever-changing conditions. When trying to learn from such non-stationary data, deep neural networks (DNNs) undergo catastrophic forgetting of previously learned information. Among the common approaches to avoid catastrophic forgetting, rehearsal-based methods have proven effective. However, they are still prone to forgetting due to task-interference as all parameters respond to all tasks. To counter this, we take inspiration from sparse coding in the brain and introduce dynamic modularity and sparsity (Dynamos) for rehearsal-based general continual learning. In this setup, the DNN learns to respond to stimuli by activating relevant subsets of neurons. We demonstrate the effectiveness of Dynamos on multiple datasets under challenging continual learning evaluation protocols. Finally, we show that our method learns representations that are modular and specialized, while maintaining reusability by activating subsets of neurons with overlaps corresponding to the similarity of stimuli.
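As a rough sketch of input-conditioned sparse activation in this spirit (a stand-in for illustration, not the paper's actual gating mechanism), one can score units per stimulus and keep only the top-k, so that similar inputs select overlapping unit subsets:

```python
import torch
import torch.nn as nn

class TopKGate(nn.Module):
    """Respond to a stimulus by activating only the k most relevant units."""
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(dim, dim)  # input-conditioned relevance scores
        self.k = k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        scores = self.scorer(h)
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.zeros_like(h).scatter_(-1, topk, 1.0)
        return h * mask  # all other units stay silent for this input
```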
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. It contains more than 400k sentences per language, annotated with a total of at least 100k entities from three standard entity categories (Person, Location and Organization) for 9 out of the 11 languages. The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian-language sentence. We also create manually annotated test sets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing test sets and on the Naamapadam test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam test set compared to an mBERT model fine-tuned on existing datasets, and an F1 score of more than 80 for 7 out of the 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
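Assuming the released IndicNER checkpoint is hosted on the Hugging Face Hub under an id like ai4bharat/IndicNER (the id here is our guess; check the project page above), inference could look like this sketch:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Model id is our assumption of where IndicNER is hosted; verify on the Hub.
model_id = "ai4bharat/IndicNER"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Aggregate sub-word predictions into whole-entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("मुकेश अंबानी रिलायंस इंडस्ट्रीज के अध्यक्ष हैं"))  # Hindi example
```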
Automated offensive language detection is essential in combating the spread of hate speech, particularly in social media. This paper describes our work on Offensive Language Identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task that labels a tweet as offensive or non-offensive. We evaluate different mono-lingual and multi-lingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore external data augmentation from other existing Marathi hate speech corpora, HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, outperforms all other models with an F1 score of 98.43 on the HASOC 2022 test set when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate). With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
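The described setup is standard sequence-classification fine-tuning. A minimal sketch with the transformers library follows; the model id is a placeholder for the MahaTweetBERT checkpoint, not a confirmed hub id.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder id for MahaTweetBERT; verify the actual hub id before use.
model_id = "l3cube-pune/marathi-tweets-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("example tweet text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # [non-offensive, offensive] after fine-tuning
pred = logits.argmax(dim=-1)
```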
Pre-training large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Although this method has proven effective for many domains, it might not always provide desirable benefits. In this paper, we study the effects of hateful pre-training on low-resource hate speech classification tasks. While previous studies on the English language have emphasized its importance, we aim to augment their observations with some non-obvious insights. We evaluate different variations of tweet-based BERT models pre-trained on hateful, non-hateful, and mixed subsets of a 40M tweet dataset. This evaluation is carried out for the Indian languages Hindi and Marathi. The paper provides empirical evidence that hateful pre-training is not the best pre-training option for hate speech detection: we show that pre-training on non-hateful text from the target domain provides similar or better results. Further, we introduce HindTweetBERT and MahaTweetBERT, the first publicly available BERT models pre-trained on Hindi and Marathi tweets, respectively, and show that they provide state-of-the-art performance on hate speech classification tasks. We also release hateful BERT models for the two languages, along with HateEval-Hi and HateEval-Mr, gold hate speech evaluation benchmarks consisting of 2000 manually labeled tweets each. The models and data are available at https://github.com/l3cube-pune/MarathiNLP .
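Continued masked-language-model pre-training on domain tweets, the core operation behind these models, can be sketched as follows; dataset handling is elided and the base checkpoint is our pick for illustration.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling)

# Base checkpoint is our choice for illustration; the paper's models start
# from tweet-domain BERT variants.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Randomly masks 15% of tokens in each batch for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
# Train with transformers.Trainer (or a custom loop) over tokenized tweets,
# then fine-tune the resulting checkpoint on labeled hate speech data.
```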
Language models are pre-trained on large volumes of generic data, such as book corpora, Common Crawl and Wikipedia, which is crucial for the model to understand the linguistic characteristics of the language. New studies suggest using Domain-Adaptive Pre-training (DAPT) and Task-Adaptive Pre-training (TAPT) as an intermediate step before the final fine-tuning task. This step helps cover the target-domain vocabulary and improves model performance on the downstream task. In this work, we study the impact of training only the embedding layer on model performance during TAPT and task-specific fine-tuning. Based on our study, we propose a simple approach that makes this intermediate step more efficient for BERT-based models through selective pre-training of BERT layers. We show that training only the BERT embedding layer during TAPT is sufficient to adapt to the vocabulary of the target domain and achieve comparable performance. Our approach is computationally efficient, with 78% fewer parameters trained during TAPT. The proposed embedding-layer fine-tuning approach can also serve as an efficient domain adaptation technique.
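The embedding-only TAPT step translates almost directly into code. A minimal sketch, assuming a standard BERT MLM checkpoint (the attribute path below holds for transformers' BertForMaskedLM):

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Freeze every parameter, then unfreeze only the embedding layer for TAPT.
for param in model.parameters():
    param.requires_grad = False
for param in model.bert.embeddings.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable / total:.1%} of all parameters")
```

On bert-base-uncased this leaves roughly a fifth of the parameters trainable, which is consistent with the 78% reduction reported above.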
With the advancement of computer vision techniques, the need to classify images based on their features has become a major task and a necessity. In this project, we propose two models: the first uses feature extraction and classification with ORB and an SVM, and the second uses a CNN architecture. The end goal of the project is to understand the concepts behind feature extraction and image classification. The trained CNN model will also be converted to the TFLITE format for Android development.
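A bag-of-visual-words pipeline is one common way to combine ORB descriptors with an SVM; the sketch below assumes that framing, since the abstract does not specify how descriptors are aggregated.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create()

def bovw_histogram(image_path: str, kmeans: KMeans) -> np.ndarray:
    """Describe an image as a histogram over a visual-word vocabulary."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    words = kmeans.predict(desc.astype(np.float32))
    return np.bincount(words, minlength=kmeans.n_clusters).astype(float)

# Pipeline: stack ORB descriptors from training images, fit KMeans to build
# the vocabulary, convert every image to a histogram, then train the SVM:
# svm = SVC(kernel="rbf").fit(train_histograms, train_labels)
```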
In this project, we propose a CNN architecture to detect anomalous and suspicious activities. The activities chosen for the project are running, jumping and kicking in public places, as well as carrying guns, bats and knives in public places. We compare the trained model with previous models such as YOLO, VGG16 and VGG19. The trained model is then deployed for real-time detection, and the TFLITE format of the trained .h5 model is used to build an Android application for classification.
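Converting a trained Keras .h5 model to TFLITE, as described, takes only a few lines with the TensorFlow converter; the file names here are placeholders.

```python
import tensorflow as tf

# Convert the trained Keras model (saved as .h5) to TFLite for Android.
model = tf.keras.models.load_model("activity_classifier.h5")  # placeholder path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization
tflite_model = converter.convert()

with open("activity_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```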
The proposed shopping assistant model, SANIP, will help blind persons detect hand-held objects and receive video feedback of the information retrieved from the detected objects. The proposed model consists of three Python models: custom object detection, text detection and barcode detection. To detect hand-held objects, we created our own custom dataset comprising daily goods such as Parle-G, Tide and Lays. In addition, we collected images of shopping carts and exit signs, since using a cart is essential for any shopper and noticing exit signs matters in an emergency. For the other two models, the retrieved text and barcode information is converted from text to speech and conveyed to the blind user. The models were used to detect the objects they were trained on, and they successfully detected and recognized the desired outputs with good accuracy and precision.
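For the barcode-to-speech path, here is a minimal sketch using common libraries (pyzbar for decoding and pyttsx3 for offline text-to-speech; these are our library choices, as the abstract does not name its stack):

```python
import cv2
from pyzbar import pyzbar   # barcode decoding
import pyttsx3              # offline text-to-speech

engine = pyttsx3.init()
frame = cv2.imread("shelf_item.jpg")  # placeholder input frame

# Decode every barcode in the frame and speak its contents aloud.
for barcode in pyzbar.decode(frame):
    text = barcode.data.decode("utf-8")
    engine.say(f"Barcode detected: {text}")
engine.runAndWait()
```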